Reduction from Complementary-Label Learning to Probability Estimates

Authors

Abstract

Complementary-Label Learning (CLL) is a weakly-supervised learning problem that aims to learn a multi-class classifier from only complementary labels, which indicate a class an instance does not belong to. Existing approaches mainly adopt the paradigm of reduction to ordinary classification, applying specific transformations and surrogate losses to connect CLL back to ordinary classification. Those approaches, however, face several limitations, such as a tendency to overfit. In this paper, we sidestep those limitations with a novel perspective: reduction to probability estimates of complementary classes. We prove that accurate probability estimates of complementary labels lead to good classifiers through a simple decoding step. The proof establishes a reduction framework from CLL to probability estimates. The framework offers explanations of several key CLL approaches as its special cases and allows us to design an improved algorithm that is more robust in noisy environments. It also suggests a validation procedure based on the quality of probability estimates, offering a way to validate models with only complementary labels. The flexible framework opens a wide range of unexplored opportunities for using deep and non-deep models for probability estimation to solve CLL. Empirical experiments further verify the framework's efficacy and robustness in various settings. The full paper can be accessed at https://arxiv.org/abs/2209.09500 .
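To illustrate the decoding step the abstract refers to, here is a minimal sketch. It assumes the common uniform complementary-label setting, where the complementary probability of class k is (1 - p(y=k|x)) / (K - 1), so the most likely ordinary label is the class with the smallest estimated complementary probability. The function name `decode` is illustrative, not the paper's exact decoder.

```python
import numpy as np

def decode(comp_prob_estimates):
    """Decode ordinary-label predictions from estimated
    complementary-label probabilities (rows: instances, cols: classes).

    Under the uniform complementary-label assumption,
    p(ybar = k | x) = (1 - p(y = k | x)) / (K - 1),
    so the most likely ordinary class is the one with the
    smallest estimated complementary probability.
    """
    return np.argmin(comp_prob_estimates, axis=1)

# Toy example: 2 instances, K = 3 classes.
comp_probs = np.array([
    [0.10, 0.50, 0.40],  # class 0 is least likely to be complementary
    [0.45, 0.45, 0.10],  # class 2
])
print(decode(comp_probs))  # -> [0 2]
```

Any model that outputs per-class complementary probabilities (deep or non-deep) can plug into this step, which is the flexibility the abstract emphasizes.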


Similar Articles

Learning to Identify Complementary Products from DBpedia

Identifying the complementary relationship between products, like a cartridge to a printer, is a very useful technique to provide recommendations. These are typically purchased together or within a short time frame and thus online retailers benefit from it. Existing approaches rely heavily on transactions and therefore they suffer from: (1) the cold start problem for new products; (2) the inabi...


Learning from Complementary Labels

Collecting labeled data is costly and thus is a critical bottleneck in real-world classification tasks. To mitigate the problem, we consider a complementary label, which specifies a class that a pattern does not belong to. Collecting complementary labels would be less laborious than ordinary labels since users do not have to carefully choose the correct class from many candidate classes. Howeve...
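The complementary-label collection process described above can be sketched in a few lines: instead of asking an annotator for the correct class, one records a class the instance does not belong to, drawn uniformly here for illustration (the uniform choice is an assumption of this sketch, not a claim about the cited paper's data model).

```python
import random

def complementary_label(true_label, num_classes, rng=random):
    """Draw a complementary label: a class the instance does NOT
    belong to, chosen uniformly among the K - 1 incorrect classes."""
    candidates = [k for k in range(num_classes) if k != true_label]
    return rng.choice(candidates)

# An instance whose true class is 2 (of 5) gets any label except 2.
print(complementary_label(true_label=2, num_classes=5))
```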


Learning from Label Preferences

In this paper, we review the framework of learning (from) label preferences, a particular instance of preference learning. Following an introduction to the learning setting, we particularly focus on our own work, which addresses this problem via the learning by pairwise comparison paradigm. From a machine learning point of view, learning by pairwise comparison is especially appealing as it deco...


Confidence measures from local posterior probability estimates

In this paper we introduce a set of related confidence measures for large vocabulary continuous speech recognition (LVCSR) based on local phone posterior probability estimates output by an acceptor HMM acoustic model. In addition to their computational efficiency, these confidence measures are attractive as they may be applied at the state-, phone-, wordor utterance-levels, potentially enabling...


Evaluating Probability Estimates from Decision Trees

Decision trees, a popular choice for classification, have their limitation in providing good quality probability estimates. Typically, smoothing methods such as Laplace or m-estimate are applied at the decision tree leaves to overcome the systematic bias introduced by the frequency-based estimates. An ensemble of decision trees has also been shown to help in reducing the bias and variance in th...
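The Laplace smoothing mentioned above is simple to state concretely: a leaf with class counts n_k replaces the raw frequency n_k / n with (n_k + 1) / (n + K), pulling estimates away from the extreme 0/1 values that small leaves produce. A minimal sketch (function name is illustrative):

```python
def leaf_probabilities(class_counts, smoothing=1.0):
    """Estimate class probabilities at a decision-tree leaf.

    Raw frequencies n_k / n are systematically biased toward 0 or 1
    at small leaves; Laplace smoothing (n_k + 1) / (n + K) pulls the
    estimates toward the uniform distribution (smoothing=1.0 gives
    the classic Laplace correction; other values give an m-estimate
    flavor).
    """
    n = sum(class_counts)
    k = len(class_counts)
    return [(c + smoothing) / (n + smoothing * k) for c in class_counts]

# A pure 2-class leaf holding 3 examples: raw frequencies would give
# [1.0, 0.0]; Laplace smoothing gives the less extreme [0.8, 0.2].
print(leaf_probabilities([3, 0]))
```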



Journal

Journal title: Lecture Notes in Computer Science

Year: 2023

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-031-33377-4_36